Search results for: All records, where Creators/Authors contains: "Frati, Lapo"

  1. This work identifies a simple pre-training mechanism that leads to representations exhibiting better continual and transfer learning. This mechanism—the repeated resetting of weights in the last layer, which we nickname “zapping”—was originally designed for a meta-continual-learning procedure, yet we show it is surprisingly applicable in many settings beyond both meta-learning and continual learning. In our experiments, we wish to transfer a pre-trained image classifier to a new set of classes in a few shots. We show that our zapping procedure results in improved transfer accuracy and/or more rapid adaptation in both standard fine-tuning and continual learning settings, while being simple to implement and computationally efficient. In many cases, we achieve performance on par with state-of-the-art meta-learning by using a combination of zapping and sequential learning, without needing expensive higher-order gradients. An intuitive explanation for the effectiveness of this zapping procedure is that representations trained with repeated zapping learn features that are capable of rapidly adapting to newly initialized classifiers. Such an approach may be considered a computationally cheaper type of, or alternative to, meta-learning rapidly adaptable features with higher-order gradients. This adds to recent work on the usefulness of resetting neural network parameters during training, and invites further investigation of this mechanism. (A minimal sketch of the zapping loop appears after this list.)
  2. In environments that vary frequently and unpredictably, bet-hedgers can overtake the population. Diversifying bet-hedgers have a diverse set of offspring so that, no matter the conditions they find themselves in, at least some offspring will have high fitness. In contrast, conservative bet-hedgers produce offspring that all share an in-between phenotype relative to the specialists. Here, we use an evolutionary algorithm of gene regulatory networks to evolve the two strategies de novo and investigate their relative success in different parameter settings. We found that diversifying bet-hedgers almost always evolved first, but were eventually outcompeted by conservative bet-hedgers. We argue that even though similar selection pressures apply to the two bet-hedger strategies, conservative bet-hedgers could win due to the robustness of their evolved networks, in contrast to the sensitive networks of the diversifying bet-hedgers. These results reveal an unexplored aspect of the evolution of bet-hedging that could shed more light on the principles of biological adaptation in variable environmental conditions. (A toy model contrasting the two strategies appears after this list.)
  3. Catastrophic forgetting continues to severely restrict the learnability of controllers suitable for multiple task environments. Efforts to combat catastrophic forgetting reported in the literature to date have focused on how control systems can be updated more rapidly, hastening their adjustment from good initial settings to new environments, or, more circumspectly, on suppressing their ability to overfit to any one environment. When using robots, the environment includes the robot's own body, its shape and material properties, and how its actuators and sensors are distributed along its mechanical structure. Here we demonstrate for the first time how one such design decision (sensor placement) can alter the landscape of the loss function itself, either expanding or shrinking the weight manifolds containing suitable controllers for each individual task, thus increasing or decreasing their probability of overlap across tasks, and thereby reducing or inducing the potential for catastrophic forgetting. (A toy illustration of this overlap effect appears after this list.)
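
Sketch for abstract 1: a minimal PyTorch rendering of the zapping idea, i.e. periodically re-initializing the last layer during pre-training. The reset cadence, optimizer, and backbone/head split are illustrative assumptions, not the authors' published code.

import torch
import torch.nn as nn

def pretrain_with_zapping(backbone, head, loader, epochs=10, zap_every=100):
    # Jointly train backbone and head, but re-initialize ("zap") the head's
    # weights every `zap_every` steps so the backbone cannot specialize to any
    # one classifier and must learn rapidly adaptable features.
    opt = torch.optim.SGD(
        list(backbone.parameters()) + list(head.parameters()), lr=0.01, momentum=0.9
    )
    loss_fn = nn.CrossEntropyLoss()
    step = 0
    for _ in range(epochs):
        for x, y in loader:
            if step % zap_every == 0:
                head.reset_parameters()  # the zap: a freshly initialized last layer
            opt.zero_grad()
            loss = loss_fn(head(backbone(x)), y)
            loss.backward()
            opt.step()
            step += 1

# Illustrative usage (shapes arbitrary; `train_loader` is assumed to exist):
# backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
# head = nn.Linear(256, 10)  # nn.Linear provides reset_parameters()
# pretrain_with_zapping(backbone, head, train_loader)

Note that reset_parameters() re-initializes the layer in place, so the optimizer keeps tracking the same tensors; transferring then amounts to attaching yet another fresh head, which the features have repeatedly practiced accommodating.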
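Sketch for abstract 2: the authors evolve gene regulatory networks; the toy below (invented fitness function, brood size, and environment model) only illustrates why a conservative bet-hedger's low-variance fitness can beat a diversifying bet-hedger's fluctuating fitness once long-run, multiplicative growth (geometric-mean fitness) is what counts.

import numpy as np

rng = np.random.default_rng(0)
GENERATIONS = 1000
BROOD = 10  # offspring per generation
envs = rng.choice([0.0, 1.0], size=GENERATIONS)  # unpredictable environment

def fitness(phenotype, env):
    # A phenotype matching the environment scores 1; a full mismatch scores 0.
    return 1.0 - abs(phenotype - env)

# Diversifying bet-hedger: each offspring is randomly phenotype 0 or 1, so with
# a finite brood the realized mix fluctuates from generation to generation.
frac_ones = rng.binomial(BROOD, 0.5, size=GENERATIONS) / BROOD
div = frac_ones * envs + (1.0 - frac_ones) * (1.0 - envs)

# Conservative bet-hedger: every offspring has the in-between phenotype 0.5,
# so per-generation fitness is a constant 0.5 regardless of the environment.
con = np.full(GENERATIONS, fitness(0.5, 0.0))

# Growth across generations is multiplicative, so the geometric mean decides
# the race; variance (and rare near-zero generations) penalizes the diversifier.
geo_mean = lambda f: np.exp(np.mean(np.log(f + 1e-12)))
print("diversifying:", geo_mean(div), "conservative:", geo_mean(con))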
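Sketch for abstract 3: the paper's experiments use simulated robots; the numpy toy below (invented sensor layouts, tasks, and loss threshold) merely illustrates the geometric claim, namely that changing what the controller senses rescales each task's low-loss weight region and hence how much the regions overlap.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))  # raw state of the body/environment
# Two tasks that share most of their structure but differ slightly.
targets = {"A": X[:, 0], "B": X[:, 0] + 0.1 * X[:, 1]}

def sense(x, placement):
    # Hypothetical sensor layouts: "aligned" reads the state variables directly,
    # "mixed" entangles them, which reshapes the loss landscape in weight space.
    if placement == "aligned":
        return x.copy()
    return np.stack([x[:, 0] + x[:, 1], x[:, 0] - x[:, 1]], axis=1)

def low_loss_mask(placement, task, grid, thresh=0.05):
    # Flag every grid weight vector whose mean-squared error on the task is small.
    S = sense(X, placement)
    losses = np.array([np.mean((S @ w - targets[task]) ** 2) for w in grid])
    return losses < thresh

axis = np.linspace(-2.0, 2.0, 81)
grid = np.array([[a, b] for a in axis for b in axis])
for placement in ("aligned", "mixed"):
    mask_a = low_loss_mask(placement, "A", grid)
    mask_b = low_loss_mask(placement, "B", grid)
    print(placement, "low-loss weights shared by both tasks:", int(np.sum(mask_a & mask_b)))

The "mixed" layout shrinks both tasks' low-loss regions in weight space, leaving fewer weight vectors that solve both tasks at once; in the abstract's terms, a sensor-placement choice alone changes the overlap of the suitable-controller manifolds.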